
vs - using twinny

I have no idea what Twinny means, but so far I think it may be the best of the coding assistants in VS Code. There is still some cleanup that needs to happen, but it's getting there. It has all the features of Llama Coder and Continue combined, and it goes a bit further to make them easier to use. So here I am in a little program I was writing to make it easy to search Ollama.ai for models from the CLI. If I want to find all the models on the server that use mistral, I can search for that. Or if I want everything with mistral dolphin or dolphin mistral, I can do this. Pretty useful.
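The search tool itself isn't the point of this video, but a minimal sketch of the idea looks something like this. I'm assuming the ollama.ai library page links each model as /library/<name>; that markup is a guess on my part, not a documented API. The matching just requires every term to appear somewhere in the name, so word order doesn't matter.

```python
# Hypothetical sketch of a CLI search over the ollama.ai model library.
# Assumes the library page links each model as /library/<name>; that
# markup is an assumption, not a documented API.
import re
import sys
import urllib.request

LIBRARY_URL = "https://ollama.ai/library"  # assumed page, not a formal API

def list_models() -> list[str]:
    html = urllib.request.urlopen(LIBRARY_URL).read().decode("utf-8")
    # Pull model names out of links like href="/library/mistral"
    return sorted(set(re.findall(r'href="/library/([\w.-]+)"', html)))

def search(terms: list[str]) -> list[str]:
    # Require every term to appear in the name, so "mistral dolphin"
    # and "dolphin mistral" return the same results.
    terms = [t.lower() for t in terms]
    return [m for m in list_models() if all(t in m.lower() for t in terms)]

if __name__ == "__main__":
    for name in search(sys.argv[1:]):
        print(name)
```

With that sketch, `python search.py dolphin mistral` and `python search.py mistral dolphin` would print the same list.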

Now as I was building this, Llama Coder was giving me good suggestions, but sometimes I needed a bit more. So with Continue, I can select this code that cycles through all the repos looking for search terms, and I get an overlay that says to press Command M to select or Command Shift L to edit. Command M selects the code and copies it into a new Continue chat window, and Command Shift L brings up a command box where you can type a request, and it puts the results back in the code window. But I find I have to clean up a lot when that happens, and the chat window doesn't always work, so there is a little friction there I hope they can resolve. I can also select some code, right-click, and choose from a bunch of options, but this will also replace my code, and again I have to do a bit of cleanup with what comes out.

So let's try this in Twinny. After enabling it, I'll restart VS Code just to make sure everything is set. Sometimes when disabling and enabling competing plugins, I find a restart really helps. We'll start off in the settings for the extension. Things are a little rough here. There is an API URL, which is really just the hostname. Then the API path to the generate endpoint. Then there are the ports for chat and fill-in-the-middle. Next we have the model names. If you change the model for infilling, then you should also set the template format. We can disable autosuggest, which is interesting. You can set a context length, which I changed from 10 lines before and after to 30, and then a wait to delay generating completions. You can set the temperature. It's pretty cool that it will try to bring in the context of nearby documents, and the completion cache is neat. You can specify max token limits, and then a few other things that I don't think anyone needs.
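For reference, my configuration ends up looking roughly like the block below in settings.json. These key names are my own placeholders, not Twinny's documented settings, so use the extension's settings UI for the real ones; the port and the generate path are just Ollama's defaults.

```jsonc
{
  // Hypothetical key names; check Twinny's settings UI for the real ones.
  "twinny.apiUrl": "localhost",            // hostname only, no port or path
  "twinny.apiPath": "/api/generate",       // Ollama's generate endpoint
  "twinny.chatApiPort": 11434,             // Ollama's default port
  "twinny.fimApiPort": 11434,
  "twinny.chatModelName": "codellama:7b-instruct",
  "twinny.fimModelName": "codellama:7b-code",
  "twinny.fimTemplateFormat": "codellama", // must match the infill model
  "twinny.enableAutoSuggest": true,
  "twinny.contextLength": 30,              // lines before and after, up from 10
  "twinny.debounceWait": 300,              // ms to wait before completing
  "twinny.temperature": 0.2,
  "twinny.useFileContext": true,           // bring in nearby documents
  "twinny.enableCompletionCache": true,
  "twinny.maxTokens": 512
}
```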

When I first made this video, there was one big omission: the ability to set the infilling template, so you had to stick with CodeLlama. I reached out to Richard about that, and as soon as he woke up he added a feature to adjust the templates for chat and the different functions, but not infilling. Then later in the day he added support for Stable Code for infilling. Deepseek Coder still isn't there, but I wouldn't be surprised if it is by the time you watch this. I would still love the ability to use the next model that shows up with some format we don't know yet, by defining the format ourselves, but this is a good next step.
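To show what "defining the format ourselves" would mean, here is a sketch of those infill templates as I understand them from each model's documentation. The exact token spellings are worth verifying against the model cards before relying on them.

```python
# Rough sketch of per-model fill-in-the-middle prompt formats.
# Token spellings are taken from the respective model cards as I
# understand them; double-check before relying on these.
FIM_TEMPLATES = {
    "codellama": "<PRE> {prefix} <SUF>{suffix} <MID>",
    "stable-code": "<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>",
    "deepseek-coder": "<｜fim▁begin｜>{prefix}<｜fim▁hole｜>{suffix}<｜fim▁end｜>",
}

def build_fim_prompt(model_format: str, prefix: str, suffix: str) -> str:
    """Assemble an infill prompt for the given template format."""
    return FIM_TEMPLATES[model_format].format(prefix=prefix, suffix=suffix)
```

A user-editable table like that is all it would take for an extension to support the next model's format the day it ships.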

OK, with that out of the way, let's start working with Twinny. Of course it has the tab completion that Llama Coder has, but it's a bit slower because I have to work with CodeLlama, which starts at 7 billion parameters. But what's really special is that if I select this code and then right-click, I have a bunch of choices at the bottom of the menu. And this is what I really love. It makes it so easy to add types, explain, generate docs, refactor, or write tests.

Continue also offers that, though the way they list it is a bit more confusing. But most of those options in Twinny are here in the sidebar as well, surfaced as buttons. So after explaining, I can easily refactor or write tests. I can do that in Continue, but it's up to me to type out what I want rather than clicking a simple button. Sorry, sometimes I am just lazy. And everything always gets a preview in Twinny, which you don't always get in Continue.

I think this takes all the good things from both Llama Coder and Continue and may be the most complete AI assistant for VS Code so far. I just don't like the model chosen and that I can't use a model like Deepseek Coder.

Now some will say that deepseek-coder at 1.3 billion parameters or stable-code at 3 billion is nowhere near as good as codellama at 7 billion parameters. But my response is that it's an apples-to-oranges comparison. There is a deepseek-coder with 6.7 billion parameters. But what I want from code completion isn't a full solution to everything in my head. Instead, I want help completing the line or function I am typing, and I need it quick. As long as the results are decent, speed is the most important factor, and I don't feel I am actually giving up much in terms of quality. I think some of the most exciting developments in this area over the next 6 to 12 months will be the improvements to small models: models that let you do amazing things without breaking a sweat on your laptop, or even run on a tiny device like a Raspberry Pi. Check out this video by Ian Wootten on doing that… AFTER finishing this one.

What are your thoughts on this? Is there another solution I should try? Privy is another one I have heard of but haven't had a chance to get into yet. CodeGPT used to be a favorite, but it's been a while since I used it. And what do you think of that ollama.ai model search tool? Would that be interesting to share? Let me know in the comments below. If you have paid attention to the comments on every other video I have, you know I spend a lot of time down there. Every idea folks have goes into my list of ideas for future videos, and I do expect to get to them all. This one was from a response on Twitter 2 days ago, and that last video was a response to a comment from 2 days before that. So your comments turn into videos pretty quickly… I would also love to hear about the best thing you have found at the beach, wherever you are.

I think I can stick with this cadence of a new video every Monday, Wednesday, and Friday for a long while to come, so like and subscribe to see them all as they come out.

Thanks so much for watching this. Goodbye.